6 research outputs found

    Factuality Challenges in the Era of Large Language Models

    The emergence of tools based on Large Language Models (LLMs), such as OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard, has garnered immense public attention. These incredibly useful, natural-sounding tools mark significant advances in natural language generation, yet they exhibit a propensity to generate false, erroneous, or misleading content -- commonly referred to as "hallucinations." Moreover, LLMs can be exploited for malicious applications, such as generating false but credible-sounding content and profiles at scale. This poses a significant challenge to society in terms of the potential deception of users and the increasing dissemination of inaccurate information. In light of these risks, we explore the kinds of technological innovations, regulatory reforms, and AI literacy initiatives needed from fact-checkers, news organizations, and the broader research and policy communities. By identifying the risks, the imminent threats, and some viable solutions, we seek to shed light on navigating various aspects of veracity in the era of generative AI.
    Comment: Our article offers a comprehensive examination of the challenges and risks associated with Large Language Models (LLMs), focusing on their potential impact on the veracity of information in today's digital landscape.

    Hardware by the numbers: startups


    The tactics and tropes of the Internet Research Agency

    This report summarizes a review of an expansive data set of social media posts and metadata provided by Facebook, Twitter, and Alphabet (Google), plus a set of related data, to serve as evidence for an investigation into the Internet Research Agency (IRA) influence operations. It includes an overview of Russian influence operations, a collection of summary statistics, and a set of key takeaways. Some of the key observations noted in the report include: The threat remains; there is evidence of continued interference operations on several platforms. Although Instagram, a photo and video-sharing social networking site owned by Facebook, was a significant platform for the IRA, the report notes that it is not often mentioned as a key battleground on social media. Black American communities appear to have been specifically targeted, with the IRA focused on "developing Black audiences and recruiting Black Americans as assets." Voter suppression tactics included [1] malicious misdirection; [2] candidate support redirection; and [3] voter turnout suppression. Operations included biased treatment of presidential candidates Donald Trump and Hillary Clinton, as well as other prominent figures including Sens. Ted Cruz, Marco Rubio, Lindsey Graham, John McCain, and Dr. Ben Carson.